minimax risk
Information-theoretic Limits of Online Classification with Noisy Labels
We study online classification with general hypothesis classes where the true labels are determined by some function within the class but are corrupted by stochastic noise, and the features are generated adversarially. Predictions are made using the observed noisy labels and noiseless features, while performance is measured via the minimax risk against the clean labels. The noise mechanism is modeled via a general noisy kernel that specifies, for any individual data point, a set of distributions from which the actual noisy label distribution is chosen. We show that the minimax risk is characterized (up to a logarithmic factor of the hypothesis class size) by the Hellinger gap of the noisy label distributions induced by the kernel, independent of other properties such as the means and variances of the noise. Our main technique is a novel reduction to an online comparison scheme for two hypotheses, along with a new version of Le Cam-Birgé testing suitable for online settings. Our work provides the first comprehensive characterization of noisy online classification, with guarantees measured against the ground-truth labels while learning only from noisy observations.
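As a concrete (and heavily simplified) illustration of the setting above, not of the paper's algorithm: the sketch below uses threshold classifiers on [0, 1] as the hypothesis class, flips each observed label with a fixed probability as a toy noisy kernel, and runs a generic multiplicative-weights learner whose risk is scored against the clean labels. All names here are hypothetical.

```python
import random

def run_round(hypotheses, weights, x):
    """Weighted-majority prediction over a finite hypothesis class."""
    vote = sum(w * (1 if h(x) else -1) for h, w in zip(hypotheses, weights))
    return vote >= 0

def simulate(T=2000, noise=0.2, seed=0):
    rng = random.Random(seed)
    thresholds = [i / 10 for i in range(11)]
    hypotheses = [lambda x, t=t: x >= t for t in thresholds]
    weights = [1.0] * len(hypotheses)
    truth = hypotheses[5]            # true hypothesis: threshold 0.5
    clean_mistakes = 0
    for _ in range(T):
        x = rng.random()             # features (i.i.d. here; adversarial in the paper)
        pred = run_round(hypotheses, weights, x)
        y_clean = truth(x)
        y_noisy = y_clean if rng.random() > noise else not y_clean
        clean_mistakes += (pred != y_clean)   # risk is against clean labels
        # multiplicative-weights update driven only by the *noisy* label
        weights = [w * (0.9 if h(x) != y_noisy else 1.0)
                   for h, w in zip(hypotheses, weights)]
    return clean_mistakes / T

print(f"clean-label error rate: {simulate():.3f}")
```

The point of the toy is the information flow: the learner never sees a clean label, yet the quantity being controlled is the mistake rate against the clean labels.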
Towards Sharp Minimax Risk Bounds for Operator Learning
Adcock, Ben, Maier, Gregor, Parhi, Rahul
A new paradigm in machine learning for scientific computing is focused on designing learning algorithms and methods for continuum problems. This paradigm is referred to as operator learning and has received considerable interest in the last few years [5,7,18,20,23-25,27,30,34,36]. The basic task may be posed as learning a map between infinite-dimensional function spaces, i.e., learning an operator F: X → Y, where, for example, X and Y are real, separable Hilbert spaces. Operator learning naturally arises in many scientific problems where one wants to learn how a continuum model, often described by partial differential equations (PDEs), maps inputs, such as parameters or boundary conditions, to outputs, such as states or observables. A prototypical example to keep in mind is learning parameter-to-solution maps of parametric PDEs [1,2,11]. In contrast to more classical surrogate modeling, which typically focuses on learning finite-dimensional parameter-to-solution maps for some fixed discretization, operator learning directly aims to learn/approximate the continuum map F: X → Y itself. Thus, the inputs and outputs are functions (not vectors) and the goal is to directly design discretization-invariant methods [7,23]. From a statistical perspective, this naturally leads to a nonparametric regression problem in which both the object of interest (the operator) and the observations (finite number of noisy samples) are infinite-dimensional.
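A minimal sketch of the regression-from-function-samples viewpoint, under my own toy assumptions (not a method from the paper): the target operator F maps a function to its cumulative integral, functions are discretized on an n-point grid, and a plain least-squares matrix fit stands in for a neural operator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 200                         # grid size, number of training pairs
grid = np.linspace(0.0, 1.0, n)

def true_operator(u):
    """F(u): cumulative integral of u on [0, 1] via the trapezoidal rule."""
    dx = grid[1] - grid[0]
    return np.concatenate([[0.0], np.cumsum((u[1:] + u[:-1]) / 2 * dx)])

def sample_function():
    """Random input function from a small Fourier family."""
    a = rng.normal(size=3)
    return a[0] + a[1] * np.sin(2 * np.pi * grid) + a[2] * np.cos(2 * np.pi * grid)

U = np.stack([sample_function() for _ in range(m)])   # (m, n) discretized inputs
V = np.stack([true_operator(u) for u in U])           # (m, n) discretized outputs
V += 0.01 * rng.normal(size=V.shape)                  # noisy observations

# Linear "operator surrogate": fit an n x n matrix A so that V ≈ U @ A.
A, *_ = np.linalg.lstsq(U, V, rcond=None)

u_test = sample_function()
err = np.max(np.abs(u_test @ A - true_operator(u_test)))
print(f"max pointwise error on a held-out input: {err:.4f}")
```

Even this crude surrogate recovers F on the span of the training inputs; the statistical question raised above is what rates are achievable when the input and output spaces are genuinely infinite-dimensional.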
Learning the score under shape constraints
Lewis, Rebecca M., Feng, Oliver Y., Reeve, Henry W. J., Xu, Min, Samworth, Richard J.
Score estimation has recently emerged as a key modern statistical challenge, due to its pivotal role in generative modelling via diffusion models. Moreover, it is an essential ingredient in a new approach to linear regression via convex $M$-estimation, where the corresponding error densities are projected onto the log-concave class. Motivated by these applications, we study the minimax risk of score estimation with respect to squared $L^2(P_0)$-loss, where $P_0$ denotes an underlying log-concave distribution on $\mathbb{R}$. Such distributions have decreasing score functions, but on its own, this shape constraint is insufficient to guarantee a finite minimax risk. We therefore define subclasses of log-concave densities that capture two fundamental aspects of the estimation problem. First, we establish the crucial impact of tail behaviour on score estimation by determining the minimax rate over a class of log-concave densities whose score function exhibits controlled growth relative to the quantile levels. Second, we explore the interplay between smoothness and log-concavity by considering the class of log-concave densities with a scale restriction and a $(\beta,L)$-Hölder assumption on the log-density for some $\beta\in [1,2]$. We show that the minimax risk over this latter class is of order $L^{2/(2\beta+1)}n^{-\beta/(2\beta+1)}$ up to poly-logarithmic factors, where $n$ denotes the sample size. When $\beta< 2$, this rate is faster than could be obtained under either the shape constraint or the smoothness assumption alone. Our upper bounds are attained by a locally adaptive, multiscale estimator constructed from a uniform confidence band for the score function. This study highlights intriguing differences between the score estimation and density estimation problems over this shape-constrained class.
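To make the loss concrete in the simplest log-concave case: for $P_0 = N(\mu, \sigma^2)$ the score is $(\log p_0)'(x) = -(x-\mu)/\sigma^2$, and the sketch below Monte-Carlo-estimates the squared $L^2(P_0)$ risk of a naive plug-in Gaussian fit. This plug-in is my own illustration, not the paper's multiscale estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 2.0, 1.5
x = rng.normal(mu, sigma, size=5000)   # sample from P0 = N(mu, sigma^2)

# Plug-in estimator: fit (mu, sigma) by moments, differentiate the log-density.
mu_hat, sigma_hat = x.mean(), x.std()
score_hat = -(x - mu_hat) / sigma_hat**2   # estimated score at the sample points
score_true = -(x - mu) / sigma**2          # true score s0(x) = (log p0)'(x)

# Monte Carlo estimate of the squared L2(P0) risk  E_{P0}[(s_hat - s0)^2]
risk = np.mean((score_hat - score_true) ** 2)
print(f"estimated squared L2(P0) risk: {risk:.5f}")
```

In this parametric toy the risk decays at the rate $1/n$; the abstract's point is that over the nonparametric Hölder-plus-log-concave classes the attainable rate is the slower $n^{-\beta/(2\beta+1)}$, and that tail behaviour alone can make the risk infinite.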
Performance Guarantees for Quantum Neural Estimation of Entropies
Sreekumar, Sreejith, Goldfeld, Ziv, Wilde, Mark M.
Estimating quantum entropies and divergences is an important problem in quantum physics, information theory, and machine learning. Quantum neural estimators (QNEs), which utilize a hybrid classical-quantum architecture, have recently emerged as an appealing computational framework for estimating these measures. Such estimators combine classical neural networks with parametrized quantum circuits, and their deployment typically entails tedious tuning of hyperparameters controlling the sample size, network architecture, and circuit topology. This work initiates the study of formal guarantees for QNEs of measured (Rényi) relative entropies in the form of non-asymptotic risk bounds. We further establish exponential tail bounds showing that the error is sub-Gaussian, and thus sharply concentrates about the ground truth value. For an appropriate sub-class of density operator pairs on a space of dimension $d$ with bounded Thompson metric, our theory establishes a copy complexity of $O(|\Theta(\mathcal{U})|d/\varepsilon^2)$ for QNE with a quantum circuit parameter set $\Theta(\mathcal{U})$, which has minimax optimal dependence on the accuracy $\varepsilon$. Additionally, if the density operator pairs are permutation invariant, we improve the dimension dependence above to $O(|\Theta(\mathcal{U})|\mathrm{polylog}(d)/\varepsilon^2)$. Our theory aims to facilitate principled implementation of QNEs for measured relative entropies and guide hyperparameter tuning in practice.
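For context on why a neural estimator fits here (my own aside, hedged, not taken from the abstract): the measured relative entropy admits a variational characterization of the form

$$ D^{\mathrm{M}}(\rho\|\sigma) \;=\; \sup_{\omega > 0} \; \mathrm{Tr}[\rho \log \omega] \,-\, \log \mathrm{Tr}[\sigma \omega], $$

where the supremum runs over positive definite operators $\omega$. A QNE in the sense above restricts $\omega$ to a family generated by a parametrized quantum circuit, with parameter set $\Theta(\mathcal{U})$, and maximizes an empirical version of the right-hand side from copies of $\rho$ and $\sigma$; the copy-complexity bounds quoted above scale with the size of that parameter set.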